25 - Artificial Intelligence II [ID:9439]

So welcome to the last week of AI. What I want to do this week is finish up with supervised learning, give you a very brief glimpse of unsupervised learning, and then wrap up. And I'll answer questions at any point. For those of you who are curious about when we will actually have the exam: my last information was that I was asked when you would like it, and I said on the 17th, according to the poll, at 2 o'clock. That was the last I heard, so I'm assuming that we'll have a room, which will be announced beforehand. And just this morning the Prüfungsamt called me: could you tell me when your exam is? Because apparently Mr. Hoffman, who actually sets the dates, is ill. So now you have exactly as much information as I do. I hope that we'll have a room and that I'll see all of you there. I think the time has been announced on the forum, so you should all know, and we'll just do our thing. Okay. The second disaster of this morning was that I accidentally deleted all my intermediate files for the slides, which means my attempt at generating and publishing the slides was more difficult than I thought. It took me a couple more hours, which means that I might run out of slides somewhere during this lecture; I don't know yet. I've been busy actually catching up, so we'll see. So what did we do? We looked at inverse resolution, the last bit of knowledge in learning.

And the idea of inverse resolution is really very, very simple. What you want is that the classifications (remember: positive or negative) follow from the background knowledge, which is of course what we want to inject into learning, that's the whole point of the thing, plus the hypothesis we currently have, plus of course the descriptions. And if those entail the classifications, then of course there must be a proof.
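(As a compact statement of the setup, this is the standard entailment constraint of knowledge-based inductive learning, written here in LaTeX notation:

$$\mathit{Background} \wedge \mathit{Hypothesis} \wedge \mathit{Descriptions} \models \mathit{Classifications}$$

and inverse resolution searches for a $\mathit{Hypothesis}$ that makes this entailment hold.)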

And the idea is to look at that proof, just like in explanation-based learning, but not by making the proof in full and then deriving information, or rules in that case, from it. What we do now is proof search, essentially backwards, and during that search we learn. And that has the advantage that we can actually invent predicates.

That's the main claim to fame of inverse resolution, that we can do this. We've looked at resolution, and the nice thing here is that you can run a resolution proof backwards: you basically take the end, which is the empty clause (which, since we're learning rules, I've put into the "True implies False" form), and then you have the examples, and then you can essentially backchain with these using backwards resolution and eventually generalize, generalize, generalize to the general rules.

So normally in resolution what you do is start with the general rules, instantiate them with the information you have about the situation, come up with the empty clause, and then you've proven something, usually something observed or predictable about the situation. And here we do exactly the same thing backwards, because we don't have the general rules yet; but they appear in this backwards-made proof, in the same spot where we would have had them before.
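To make the two directions concrete, here is a single resolution step, with clauses of my own choosing for illustration:

$$\frac{\neg P(x) \vee Q(x) \qquad P(A)}{Q(A)}$$

Forward resolution takes the two parent clauses and produces the resolvent $Q(A)$. An inverse resolution step starts from the resolvent $Q(A)$ and one parent, say $P(A)$, and has to reconstruct the other parent. The rule $\neg P(x) \vee Q(x)$, i.e. $P(x) \Rightarrow Q(x)$, is only one of several clauses that would do (for instance, $\neg P(A) \vee Q(A)$ works as well), and that choice is exactly where the extra search comes from.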

Okay, the problem here is that if the combinatorics of resolution are bad, the combinatorics of backwards resolution are much, much worse. Why is that? Well, for instance, in resolution we can use unification, which is a deterministic thing. In backwards resolution we have to use something called anti-unification, which kind of goes in the other direction, and that opens up a huge search space.
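To connect the pieces: anti-unifying two concrete terms, Plotkin's least general generalization (lgg), is itself cheap to compute; the search-space blow-up comes from choosing, during inverse resolution, which terms to generalize and which inverse substitutions to apply. Here is a minimal sketch of the lgg itself in Python, assuming terms are encoded as nested tuples; the encoding and all names are my own illustrative choices, not from the lecture.

def lgg(s, t, subst=None, counter=None):
    # Least general generalization of first-order terms s and t.
    # Terms: nested tuples ("f", arg1, ...) for applications,
    # plain strings for constants.
    if subst is None:
        subst = {}       # maps a mismatched pair (s, t) to its variable
    if counter is None:
        counter = [0]    # fresh-variable counter shared across the recursion
    if s == t:
        return s         # identical terms generalize to themselves
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same function symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(lgg(a, b, subst, counter)
                               for a, b in zip(s[1:], t[1:]))
    # Mismatch: replace the pair by a variable, reusing the same variable
    # for the same pair; this is what keeps the result least general.
    if (s, t) not in subst:
        counter[0] += 1
        subst[(s, t)] = "X%d" % counter[0]
    return subst[(s, t)]

# The lgg of f(a, g(a)) and f(b, g(b)) is f(X1, g(X1)).
print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))

Note that both occurrences of the mismatched pair (a, b) get the same variable X1; dropping that bookkeeping would still give a generalization, just not the least general one.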

combinatorics. And basically what you do is you let go of everything that's expensive. Function

symbols, bad for unification, right? They make all kinds of trouble in anti-unification. Go away.

Full resolution, bah, go away. Horton clauses are nice, they kind of give you some kind of control,

exactly the control we used in logic programming. People can predict what happens there,

which is why we're using Prolog as a programming language, not like a theorem prover. So you do

a couple of things, which is essentially the upshot of that, is that you're getting inductive logic

programming, but you're not getting inductive first-order logic. And the kind of paradigmatic
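As a tiny illustration of what the Horn-clause restriction buys (the family example is the standard textbook one, not from this lecture): given background facts such as $\mathit{Parent}(Elizabeth, Charles)$ and $\mathit{Parent}(Charles, William)$, plus positive and negative examples of $\mathit{Grandparent}$, an ILP system searches for a definite clause like

$$\mathit{Parent}(x, y) \wedge \mathit{Parent}(y, z) \Rightarrow \mathit{Grandparent}(x, z)$$

which is a Horn clause, and at the same time a Prolog program, rather than an arbitrary first-order formula.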

And the paradigmatic thing here concerns literals: forward resolution gets rid of certain literals, which means that if you do it backwards, you have to invent a literal, the literal that was cut away. And when you have to invent it, you can take one of the old ones, one of the known ones, or you can just

Part of a video series
Accessible via: open access
Duration: 01:20:51 min
Recording date: 2018-07-11
Uploaded on: 2018-07-12 11:06:02
Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and natural language understanding.
The course builds on the lecture Künstliche Intelligenz I from the winter semester and continues it.

Learning objectives and competencies
Subject, learning, and methodological competence

  • Knowledge: Students become acquainted with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to real-world examples (exercise assignments).

  • Analysis: Through modelling in the machine, students learn to better assess human intelligence capabilities.
